3 research outputs found

    A modular distributed transactional memory framework

    Dissertation submitted to obtain the Master's degree in Informatics Engineering (Engenharia Informática).
    Traditional lock-based concurrency control is complex and error-prone due to its low-level nature and composability challenges. Software transactional memory (STM), inherited from the database world, has risen as an exciting alternative, sparing the programmer from dealing explicitly with such low-level mechanisms. In real-world scenarios, software often faces requirements such as high availability and scalability, and the solution usually consists of building a distributed system. Given the benefits of STM over traditional concurrency control, Distributed Software Transactional Memory (DSTM) is now being investigated as an attractive alternative for distributed concurrency control. Our long-term objective is to transparently enable multithreaded applications to execute over a DSTM setting. In this work we pave the way by defining a modular DSTM framework for the Java programming language. We extend an existing, efficient STM framework with a new software layer to create a DSTM framework. This new layer interacts with the local STM through well-defined interfaces, and allows the implementation of different distributed memory models while providing a non-intrusive, familiar programming model to applications, unlike any other DSTM framework. Using the proposed framework we have successfully, and easily, implemented a replicated STM that uses a certification protocol to commit transactions. An evaluation using common STM benchmarks showcases the efficiency of the replicated STM, and its modularity enables us to provide insight into how different implementations of the group communication system required by the certification scheme affect performance under different workloads. Funded by Fundação para a Ciência e Tecnologia, project PTDC/EIA-EIA/113613/2009.
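
    The abstract describes a certification protocol layered over a group communication system but shows no code, so a minimal Java sketch may help make the scheme concrete. All names here (CertificationCommitter, TxDescriptor, certify) are illustrative assumptions, not the framework's actual API: each replica receives transaction descriptors through a total-order broadcast and validates them deterministically, so every replica independently reaches the same commit/abort decision.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Set;

        // Hypothetical sketch of a non-voting certification commit: the group
        // communication system delivers TxDescriptor messages in the same total
        // order at every replica, and each replica certifies them locally.
        final class CertificationCommitter {
            // Read/write sets plus the snapshot version the transaction read from.
            record TxDescriptor(Set<String> readSet, Map<String, Object> writeSet,
                                long snapshotVersion) {}

            private final Map<String, Long> lastCommitted = new HashMap<>();
            private long globalVersion = 0;

            // Called when the total-order broadcast delivers tx; validation is
            // deterministic, so all replicas decide identically with no extra round.
            synchronized boolean certify(TxDescriptor tx) {
                for (String key : tx.readSet()) {
                    Long committedAt = lastCommitted.get(key);
                    if (committedAt != null && committedAt > tx.snapshotVersion()) {
                        return false;  // a read was overwritten since the snapshot: abort
                    }
                }
                globalVersion++;
                for (String key : tx.writeSet().keySet()) {
                    lastCommitted.put(key, globalVersion);  // install writes at the new version
                }
                return true;  // transaction commits at every replica
            }
        }

    Because the decision depends only on the delivery order and local replica state, no voting phase is needed; this is also why the choice of group communication system, and in particular its total-order broadcast implementation, is performance-critical, as the evaluation in the abstract points out.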

    Executing requests concurrently in state machine replication

    State machine replication is one of the most popular ways to achieve fault tolerance. In a nutshell, the state machine replication approach maintains multiple replicas that both store a copy of the system's data and execute operations on that data. When requests to execute operations arrive, an "agree-execute" protocol keeps replicas synchronized: they first agree on an order for the incoming operations, and then execute them one at a time in the agreed-upon order, so that every replica reaches the same final state. Multi-core processors are the norm, but taking advantage of the available cores to execute operations simultaneously is at odds with the "agree-execute" protocol: simultaneous execution is inherently unpredictable, so replicas may arrive at different final states and the system becomes inconsistent. On one hand, we want to exploit the available cores to execute operations simultaneously and improve performance; on the other hand, replicas must abide by the agreed-upon operation order for the system to remain consistent. This dissertation proposes a solution to this dilemma. At a high level, we use speculative execution techniques to execute operations simultaneously while nonetheless ensuring that their execution is equivalent to executing the operations sequentially in the agreed-upon order. To achieve this, we: (1) propose to execute operations as serializable transactions, and (2) develop a new concurrency control protocol that ensures that the concurrent execution of a set of transactions respects the serialization order the replicas agreed upon. Since speculation is only effective if it succeeds, we also (3) propose a modification to the typical API for declaring transactions, which allows transactions to execute their logic over an abstract replica state, resulting in fewer conflicts between transactions and thus improving the effectiveness of speculative execution. An experimental evaluation shows that the contributions in this dissertation can improve the performance of a state-machine-replicated server by up to 4x, reaching up to 75% of the performance of a concurrent, fault-prone server.
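
    Since the central mechanism is committing speculatively executed transactions in the agreed order, a small Java sketch may make the idea concrete. The names (SpeculativeScheduler, Op, submit) are illustrative assumptions, not the dissertation's actual API: operations run concurrently on a thread pool, but each one blocks at commit time until all predecessors in the agreed sequence have committed.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Hypothetical sketch: execute agreed-upon operations speculatively in
        // parallel, but commit them strictly in the agreed sequence order.
        final class SpeculativeScheduler {
            interface Op { Object run(); }  // operation body, executed as a transaction

            private final ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            private long nextToCommit = 0;  // agreed position allowed to commit next

            // seq is the position this operation was assigned in the agreement phase.
            Future<Object> submit(long seq, Op op) {
                return pool.submit(() -> {
                    Object result = op.run();  // speculative execution, possibly in parallel
                    synchronized (this) {
                        while (nextToCommit != seq) {
                            wait();  // predecessors in the agreed order commit first
                        }
                        // A full protocol would validate the transaction's read set here
                        // and re-execute op if an earlier commit invalidated it; this
                        // sketch assumes the speculation succeeded.
                        nextToCommit++;
                        notifyAll();
                    }
                    return result;
                });
            }
        }

    The key property is that execution is parallel but commit order is sequential and agreed upon, so the outcome is equivalent to a serial execution in that order at every replica.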

    Pervasive gaps in Amazonian ecological research

    Biodiversity loss is one of the main challenges of our time,1,2 and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space.3,4 While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes,5,6,7 vast areas of the tropics remain understudied.8,9,10,11 In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity,12 but it remains among the least known forests in America and is often underrepresented in biodiversity databases.13,14,15 To worsen this situation, human-induced modifications16,17 may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase the generalization and applicability of biodiversity knowledge,18,19 it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata from 7,694 sampling sites for multiple organism groups in a machine learning model framework to map research probability across Brazilian Amazonia, while identifying the region's vulnerability to environmental change. 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost.